Federated Optimization of ℓ0-norm Regularized Sparse Learning

Authors

Abstract

Regularized sparse learning with the ℓ0-norm is important in many areas, including statistical learning and signal processing. Iterative hard thresholding (IHT) methods are the state-of-the-art for nonconvex-constrained sparse learning due to their capability of recovering the true support and their scalability to large datasets. The current theoretical analysis of IHT assumes the use of centralized IID data. In realistic large-scale scenarios, however, data are distributed, seldom IID, and private to edge computing devices at the local level. Consequently, it is required to study the property of IHT in a federated environment, where local devices update the sparse model individually and communicate with a central server for aggregation infrequently, without sharing data. In this paper, we propose the first group of federated IHT methods: Federated Hard Thresholding (Fed-HT) and Federated Iterative Hard Thresholding (FedIter-HT), both with theoretical guarantees. We prove that both algorithms have a linear convergence rate and guarantee the recovery of the optimal sparse estimator, which is comparable to classic IHT methods, but under decentralized, non-IID, and unbalanced data. Empirical results demonstrate that Fed-HT and FedIter-HT outperform their competitor, a distributed IHT, in terms of reducing objective values with fewer communication rounds and lower bandwidth requirements.
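To make the update pattern concrete, below is a minimal sketch of federated hard thresholding, assuming a least-squares loss and synthetic, unbalanced client data; the function names (hard_threshold, fed_ht), step size, and round counts are illustrative choices, not the authors' implementation.

```python
import numpy as np

def hard_threshold(x, k):
    """H_k operator used by IHT-style methods: keep the k
    largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argpartition(np.abs(x), -k)[-k:]
    out[keep] = x[keep]
    return out

def fed_ht(client_data, k, dim, rounds=50, local_steps=10, lr=0.01):
    """Illustrative Fed-HT loop: each client takes a few gradient steps
    on its local least-squares loss; the server averages the local models
    and applies hard thresholding once per communication round."""
    theta = np.zeros(dim)
    for _ in range(rounds):
        local_models = []
        for A, b in client_data:             # client i holds (A_i, b_i) privately
            w = theta.copy()
            for _ in range(local_steps):     # local updates, no communication
                w -= lr * A.T @ (A @ w - b) / len(b)
            local_models.append(w)
        # server: average, then project onto the constraint ||theta||_0 <= k
        theta = hard_threshold(np.mean(local_models, axis=0), k)
    return theta

# toy non-IID, unbalanced setup: clients see different amounts of data
rng = np.random.default_rng(0)
theta_true = hard_threshold(rng.normal(size=100), 5)
clients = []
for i in range(4):
    A = rng.normal(size=(30 + 10 * i, 100))
    clients.append((A, A @ theta_true + 0.01 * rng.normal(size=len(A))))
print(np.nonzero(fed_ht(clients, k=5, dim=100))[0])  # recovered support
```

The FedIter-HT variant described in the abstract differs, roughly, in where the thresholding is applied: each client would also apply hard_threshold during its local updates, rather than only the server doing so after aggregation.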


Similar Articles

L0-norm Sparse Graph-regularized SVD for Biclustering

Learning the “blocking” structure is a central challenge for high dimensional data (e.g., gene expression data). In [Lee et al., 2010], a sparse singular value decomposition (SVD) has been used as a biclustering tool to achieve this goal. However, this model ignores the structural information between variables (e.g., gene interaction graph). Although typical graph-regularized norm can incorpora...


Hardness of Approximation for Sparse Optimization with L0 Norm

In this paper, we consider sparse optimization problems with L0 norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound for the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse linear regression reveals a gap between c...


Sparse LS-SVMs with L0-norm minimization

Least-Squares Support Vector Machines (LS-SVMs) have been successfully applied in many classification and regression tasks. Their main drawback is the lack of sparseness of the final models. Thus, a procedure to sparsify LS-SVMs is a frequent desideratum. In this paper, we adapt to the LS-SVM case a recent work for sparsifying classical SVM classifiers, which is based on an iterative approximat...


Worst-Case Hardness of Approximation for Sparse Optimization with L0 Norm

In this paper, we consider sparse optimization problems with L0 norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound for the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse linear regression reveals a gap between c...


Complex-valued sparse representation based on smoothed l0 norm

In this paper, we present an algorithm for complex-valued sparse representation. In our previous work, we presented an algorithm for sparse representation based on the smoothed ℓ0-norm. Here, we extend that algorithm to complex-valued signals. The proposed algorithm is compared to the FOCUSS algorithm, and it is experimentally shown that the proposed algorithm is about two or three orders of magnitude faster ...
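For intuition about the smoothed-ℓ0 idea this snippet refers to, here is a rough real-valued sketch (the paper itself targets complex signals): ‖x‖₀ is approximated by n − Σᵢ exp(−xᵢ²/2σ²), and σ is gradually decreased while each iterate is projected back onto the constraint Ax = b. The function name sl0 and all parameter values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sl0(A, b, sigma_decrease=0.5, sigma_min=1e-4, mu=2.0, inner=3):
    """Rough smoothed-l0 sketch (real-valued): maximize the smooth
    surrogate sum(exp(-x_i**2 / (2 * sigma**2))) for decreasing sigma,
    projecting each iterate back onto the affine set {x : A x = b}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                        # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta            # ascent step on the surrogate
            x = x - A_pinv @ (A @ x - b)  # project back onto A x = b
        sigma *= sigma_decrease
    return x

# toy usage: a 4-sparse signal observed through a random 40x100 matrix
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42, 77]] = [1.0, -2.0, 1.5, 0.5]
x_hat = sl0(A, A @ x_true)
print(np.round(x_hat[[3, 17, 42, 77]], 2))  # ideally close to the true values
```

For complex-valued signals, one natural route is to replace xᵢ² in the exponent with |xᵢ|²; the update step and the projection then carry over unchanged.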



Journal

Journal title: Algorithms

Year: 2022

ISSN: 1999-4893

DOI: https://doi.org/10.3390/a15090319